Distributed Average Consensus under Quantized Communication via Event-Triggered Mass Summation
We study distributed average consensus problems in multi-agent systems with
directed communication links that are subject to quantized information flow.
The goal of distributed average consensus is for the nodes, each associated
with some initial value, to obtain the average (or some value close to the
average) of these initial values. In this paper, we present and analyze a
distributed averaging algorithm which operates exclusively with quantized
values (specifically, the information stored, processed and exchanged between
neighboring agents is subject to deterministic uniform quantization) and relies
on event-driven updates (e.g., to reduce energy consumption, communication
bandwidth, network congestion, and/or processor usage). We characterize the
properties of the proposed distributed averaging protocol on quantized values
and show that its execution, on any time-invariant and strongly connected
digraph, will allow all agents to reach, in finite time, a common consensus
value, represented as the ratio of two integers, which equals the exact
average. We conclude with examples that illustrate the operation, performance,
and potential advantages of the proposed algorithm.
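A minimal sketch of the ratio-of-two-integers idea behind the consensus value,
assuming a single token that visits every node of a directed ring; the paper's
protocol runs on general strongly connected digraphs with event-driven mass
exchanges, so the routine below only illustrates how an integer mass and an
integer counter recover the exact average:

# Sketch (not the paper's protocol): a token carries an integer mass y
# (running sum of quantized initial values) and an integer counter z (number
# of values absorbed). After one trip around the ring, y/z is exactly the
# average of the quantized initial values, i.e., a ratio of two integers.

from fractions import Fraction

def ring_mass_summation(quantized_values):
    """quantized_values: list of integers (deterministically quantized states)."""
    y, z = 0, 0                          # token mass and counter
    for node in range(len(quantized_values)):
        y += quantized_values[node]      # absorb the node's integer mass
        z += 1                           # count one more absorbed value
    return Fraction(y, z)                # exact average as a ratio of two integers

if __name__ == "__main__":
    values = [4, 7, 1, 10]               # quantized initial values
    print(ring_mass_summation(values))   # -> 11/2, the exact average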
Online Distributed Learning with Quantized Finite-Time Coordination
In this paper, we consider online distributed learning problems. Online
distributed learning refers to the process of training learning models on
distributed data sources. In our setting, a set of agents needs to cooperatively
train a learning model from streaming data. Unlike federated
learning, the proposed approach does not rely on a central server but only on
peer-to-peer communications among the agents. This approach is often used in
scenarios where data cannot be moved to a centralized location due to privacy,
security, or cost reasons. In order to overcome the absence of a central
server, we propose a distributed algorithm that relies on a quantized,
finite-time coordination protocol to aggregate the locally trained models.
Furthermore, our algorithm allows for the use of stochastic gradients during
local training. Stochastic gradients are computed using a randomly sampled
subset of the local training data, which makes the proposed algorithm more
efficient and scalable than traditional gradient descent. In our paper, we
analyze the performance of the proposed algorithm in terms of the mean distance
from the online solution. Finally, we present numerical results for a logistic
regression task.
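A hedged sketch of the overall loop described above: each agent takes a local
stochastic-gradient step on freshly streamed data, then the models are
aggregated. The aggregation here is a centralized stand-in (uniform
quantization followed by exact averaging), not the paper's quantized
finite-time coordination protocol; the logistic model, step size, and data
streams are illustrative assumptions:

import numpy as np

def quantize(x, delta=0.01):
    """Uniform deterministic quantization with step delta."""
    return delta * np.round(x / delta)

def logistic_grad(w, X, y):
    """Gradient of the logistic loss on a mini-batch (labels in {0, 1})."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def online_distributed_learning(agents_streams, dim, rounds=50, step=0.1, batch=8):
    rng = np.random.default_rng(0)
    models = [np.zeros(dim) for _ in agents_streams]
    for _ in range(rounds):
        # (i) local training with stochastic gradients on freshly streamed data
        for i, stream in enumerate(agents_streams):
            X, y = stream(batch, rng)
            models[i] = models[i] - step * logistic_grad(models[i], X, y)
        # (ii) aggregation stand-in: average of the quantized local models
        avg = np.mean([quantize(w) for w in models], axis=0)
        models = [avg.copy() for _ in models]
    return models[0]

if __name__ == "__main__":
    def make_stream(shift):
        def stream(batch, rng):
            X = rng.normal(shift, 1.0, size=(batch, 2))
            y = (X[:, 0] + X[:, 1] > 0).astype(float)
            return X, y
        return stream
    print(online_distributed_learning([make_stream(-0.5), make_stream(0.5)], dim=2))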
Distributed Optimization via Gradient Descent with Event-Triggered Zooming over Quantized Communication
In this paper, we study unconstrained distributed optimization problems with
strongly convex cost functions, in which the exchange of information in the
network is
captured by a directed graph topology over digital channels that have limited
capacity (and hence information should be quantized). Distributed methods in
which nodes use quantized communication yield a solution at the proximity of
the optimal solution, hence reaching an error floor that depends on the
quantization level used; the finer the quantization the lower the error floor.
However, it is not possible to determine in advance the optimal quantization
level that ensures specific performance guarantees (such as achieving an error
floor below a predefined threshold). Choosing a very small quantization level
that would guarantee the desired performance requires information packets of
very large size, which is not desirable (it could increase the probability of
packet losses, increase delays, etc.) and is often not feasible due to the limited
capacity of the channels available. In order to obtain a
communication-efficient distributed solution and a sufficiently close proximity
to the optimal solution, we propose a quantized distributed optimization
algorithm that converges in a finite number of steps and is able to adjust the
quantization level accordingly. The proposed solution uses a finite-time
distributed optimization protocol to find a solution to the problem for a given
quantization level in a finite number of steps, and keeps refining the
quantization level until the difference between two successive solutions
obtained with different quantization levels falls below a pre-specified
threshold.
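An illustrative sketch of the outer zooming loop: solve the problem at a given
quantization level, then halve the level and repeat until two successive
solutions are close. The inner routine below is a placeholder (gradient
descent on quantized iterates), not the paper's finite-time distributed
protocol, so only the refine-until-close logic reflects the abstract above:

import numpy as np

def quantize(x, delta):
    return delta * np.round(x / delta)

def quantized_solver(grad, x0, delta, iters=200, step=0.1):
    """Placeholder inner routine: gradient descent on quantized iterates."""
    x = quantize(x0, delta)
    for _ in range(iters):
        x = quantize(x - step * grad(x), delta)
    return x

def zooming_optimizer(grad, x0, delta0=1.0, tol=1e-3, max_refinements=20):
    delta = delta0
    x_prev = quantized_solver(grad, x0, delta)
    for _ in range(max_refinements):
        delta /= 2.0                      # refine the quantization level
        x_next = quantized_solver(grad, x_prev, delta)
        if np.linalg.norm(x_next - x_prev) < tol:
            return x_next, delta          # successive solutions close enough
        x_prev = x_next
    return x_prev, delta

if __name__ == "__main__":
    # minimize f(x) = ||x - c||^2 with c = (1.3, -0.7); the gradient is 2(x - c)
    c = np.array([1.3, -0.7])
    x_star, delta = zooming_optimizer(lambda x: 2 * (x - c), np.zeros(2))
    print(x_star, delta)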
Finite-Time Distributed Optimization with Quantized Gradient Descent
In this paper, we consider the unconstrained distributed optimization
problem, in which the exchange of information in the network is captured by a
directed graph topology, and thus nodes can send information to their
out-neighbors only. Additionally, the communication channels among the nodes
have limited bandwidth; to alleviate this limitation, quantized messages should
be exchanged among the nodes. For solving the distributed optimization problem,
we combine a distributed quantized consensus algorithm (which requires the
nodes to exchange quantized messages and converges in a finite number of steps)
with a gradient descent method. Specifically, at every optimization step, each
node performs a gradient descent step (i.e., subtracts the scaled gradient from
its current estimate), and then performs a finite-time calculation of the
quantized average of every node's estimate in the network. As a consequence,
this algorithm approximately mimics the centralized gradient descent algorithm.
The performance of the proposed algorithm is demonstrated via simple
illustrative examples.
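A minimal sketch of the alternation just described, with the finite-time
quantized averaging replaced by a centralized stand-in (quantize, then
average); the quadratic local cost functions and the step size are
illustrative assumptions:

import numpy as np

def quantized_average(estimates, delta=1e-3):
    """Stand-in for finite-time quantized averaging: quantize, then average."""
    return np.mean([delta * np.round(x / delta) for x in estimates], axis=0)

def quantized_gradient_descent(local_grads, x0, steps=100, alpha=0.05):
    estimates = [np.array(x0, dtype=float) for _ in local_grads]
    for _ in range(steps):
        # (i) local gradient step at every node
        estimates = [x - alpha * g(x) for x, g in zip(estimates, local_grads)]
        # (ii) all nodes adopt the quantized average of the current estimates
        avg = quantized_average(estimates)
        estimates = [avg.copy() for _ in estimates]
    return estimates[0]

if __name__ == "__main__":
    # each node i holds f_i(x) = (x - c_i)^2; the network optimum is mean(c_i)
    centers = [np.array([1.0]), np.array([3.0]), np.array([8.0])]
    grads = [lambda x, c=c: 2 * (x - c) for c in centers]
    print(quantized_gradient_descent(grads, x0=[0.0]))   # approx. [4.0]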
Asynchronous Distributed Optimization via ADMM with Efficient Communication
In this paper, we focus on an asynchronous distributed optimization problem.
In our problem, each node is endowed with a convex local cost function, and is
able to communicate with its neighbors over a directed communication network.
Furthermore, we assume that the communication channels between nodes have
limited bandwidth, and each node suffers from processing delays. We present a
distributed algorithm which combines the Alternating Direction Method of
Multipliers (ADMM) strategy with a finite time quantized averaging algorithm.
In our proposed algorithm, nodes exchange quantized-valued messages and operate
in an asynchronous fashion. More specifically, during every iteration of our
algorithm, each node (i) solves a local convex optimization problem (for one
of its primal variables), and (ii) utilizes a finite-time quantized averaging
algorithm to obtain the value of the second primal variable (since the cost
function for the second primal variable is not decomposable). We show that our
algorithm converges to the optimal solution at a rate of O(1/k) (where k is
the number of time steps) for the case where the local cost function of every
node is convex and not necessarily differentiable. Finally, we demonstrate the
operational advantages of our algorithm against other algorithms from the
literature.
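A hedged sketch of the ADMM structure described above, for consensus-form
problems min_x sum_i f_i(x): each node (i) solves its local problem for its
first primal variable, and (ii) obtains the shared second primal variable via
an averaging step, here a centralized stand-in for the paper's finite-time
quantized averaging. Quadratic local costs f_i(x) = (x - c_i)^2 are assumed so
that the local solve is closed-form; rho and the iteration count are
illustrative:

import numpy as np

def quantized_average(values, delta=1e-3):
    """Stand-in for finite-time quantized averaging: quantize, then average."""
    return np.mean([delta * np.round(v / delta) for v in values], axis=0)

def consensus_admm(centers, rho=1.0, iters=100):
    x = [np.zeros_like(c) for c in centers]   # first (local) primal variables
    u = [np.zeros_like(c) for c in centers]   # scaled dual variables
    z = np.zeros_like(centers[0])             # second (shared) primal variable
    for _ in range(iters):
        # (i) local solve: argmin (x - c_i)^2 + (rho/2)||x - z + u_i||^2
        x = [(2 * c + rho * (z - ui)) / (2 + rho) for c, ui in zip(centers, u)]
        # (ii) shared variable via quantized averaging of x_i + u_i
        z = quantized_average([xi + ui for xi, ui in zip(x, u)])
        # dual update
        u = [ui + xi - z for xi, ui in zip(x, u)]
    return z

if __name__ == "__main__":
    centers = [np.array([0.0]), np.array([2.0]), np.array([7.0])]
    print(consensus_admm(centers))   # approx. [3.0], minimizer of sum_i (x - c_i)^2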